Quoting from http://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf:
To say that a system of any design is an “artificial intelligence”, we mean that it has goals which it tries to accomplish by acting in the world.
I had been thinking: could it be that a respected computer vision expert really believes that the system will just spontaneously develop world intentionality? That would be pretty odd. Then I see his definition of AI here: it already presumes a robust implementation of world intentionality, which is precisely what a tool like an optimizing compiler lacks.
edit: and to head off another objection: I know evolution can produce whatever the argument demands. Evolution, however, is a very messy and inefficient process for producing very messy and inefficient solutions to problems nobody has ever even defined.
I’m not sure there is a firm boundary between goals respecting events inside the computer and those respecting events outside.
Who has made this optimizing-compiler claim that you are attacking? My impression was that AI paranoia advocates were concerned with efficient cross-domain optimization, and optimizing compilers would seem to have only a limited domain of optimization.
By the way, what do you think the most compelling argument in favor of AI paranoia is? Since I’ve been presenting arguments in favor of AI paranoia, here are the points against this position I can think of offhand that seem most compelling:
The simplest generally intelligent reasoning architectures could be so complicated as to be very difficult to achieve and improve, so that uploads would come first, and even uploads running on future supercomputers would improve such architectures only slowly: http://www.overcomingbias.com/2010/02/is-the-city-ularity-near.html
I’m not sure that a “good enough” implementation of human values would be that terrible: one trained on a huge barrage of moral dilemmas, paired with the resolutions humans say they want implemented, with the dilemmas somehow sampled from a semi-rigorously defined sample space. Our current universe certainly wasn’t optimized for human existence, but we’re doing all right at this point. (A toy sketch of this kind of training setup appears after the last point below.)
It seems possible that a seed AI is very difficult to create by accident, in much the same way an engineered virus would be very difficult to create by accident, so that for a long time it would be possible for researchers to build a seed AI, yet they refrain from doing so because of even a rudimentary awareness of the risks (an awareness which, admittedly, isn’t present today).
Progress so far has been mostly good. We see big positive trends towards improvement. Some measure of paranoia is usually justified, but excessive paranoia is often unhelpful.
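On the second point above, here is a minimal toy sketch in Python of what “train on sampled dilemmas plus the resolutions humans say they want” might look like. Everything in it is a placeholder I made up for illustration: the three-number dilemma encoding, the sample_dilemma and human_judgement functions, and the nearest-neighbour vote standing in for a real learned value model. It is not anyone’s actual proposal.

    # Toy sketch only: a crude "value model" fit to sampled dilemmas and
    # human-preferred resolutions. All features and functions are invented.
    import random
    from collections import Counter

    random.seed(0)

    def sample_dilemma():
        # A dilemma reduced to a made-up feature vector:
        # (people_at_risk, certainty_of_harm, actor_already_involved).
        return (random.randint(0, 10), random.random(), random.choice([0, 1]))

    def human_judgement(dilemma):
        # Stand-in for asking real people which resolution they want implemented.
        people, certainty, involved = dilemma
        return "intervene" if people * certainty > 2 or involved else "abstain"

    # The "huge barrage" of dilemmas with the solutions humans say they want.
    training = [(d, human_judgement(d)) for d in (sample_dilemma() for _ in range(1000))]

    def predict(dilemma, k=5):
        # Crude value model: majority vote of the k most similar training dilemmas.
        dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
        nearest = sorted(training, key=lambda t: dist(t[0], dilemma))[:k]
        return Counter(label for _, label in nearest).most_common(1)[0][0]

    print(predict((8, 0.9, 0)))   # likely "intervene"
    print(predict((1, 0.1, 0)))   # likely "abstain"

The point of the sketch is only that the proposal amounts to a straightforward supervised-learning setup; whether such a model generalizes acceptably outside the sampled space is exactly the open question.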